118 research outputs found

    (pseudo)Scalar mesons in a self-consistent NJL model

    In this study, we investigate the mass spectrum of π and σ mesons at finite chemical potential using the self-consistent NJL model and the Fierz-transformed interaction Lagrangian. The model introduces an arbitrary parameter α to reflect the weights of the Fierz-transformed interaction channels. We show that when α exceeds a certain threshold value, the chiral phase transition changes from a first-order transition to a smooth crossover, which is evident from the behaviors of the chiral condensates and meson masses. Additionally, at high chemical potential, the smaller the value of α, the higher the masses of the π and σ mesons become. Moreover, the Mott and dissociation chemical potentials both increase with α. Thus, the meson mass emerges as a valuable experimental observable for determining the value of α and investigating the properties of the chiral phase transition in dense QCD matter. Comment: Accepted by Chinese Physics
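Concretely, in much of the self-consistent NJL literature the weighting parameter enters as a convex combination of the original interaction Lagrangian and its Fierz transform. The sketch below uses the common two-flavor form; coupling and normalization conventions vary by paper, so treat this as an illustration rather than the authors' exact Lagrangian:

```latex
% Effective interaction weighted by alpha (illustrative convention):
\mathcal{L}_{\mathrm{eff}} = (1-\alpha)\,\mathcal{L}_{\mathrm{NJL}} + \alpha\,\mathcal{L}_{F},
\qquad
\mathcal{L}_{\mathrm{NJL}} = \bar{\psi}\,(i\gamma^{\mu}\partial_{\mu} - m)\,\psi
  + G\left[(\bar{\psi}\psi)^{2} + (\bar{\psi}\,i\gamma_{5}\vec{\tau}\,\psi)^{2}\right],
```

where L_F denotes the Fierz transform of the four-fermion interaction. In this convention α = 0 recovers the standard NJL model, while larger α gives the Fierz-induced channels (notably the vector channel at finite chemical potential) more weight.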

    The asymmetric effect of infectious disease equity market volatility for the physical education economy: implication for a post-Covid world

    Due to the growing importance of the sports economy and the severe impact of the Covid-19 pandemic on it, this paper examines how the infectious disease equity market volatility (ID-EMV) tracker affects the sports economy in a post-Covid world from an asymmetric perspective. We selected the newspaper-based ID-EMV index and the Wind Physical Education Concept Index (PEC) for our research. First, conventional causality tests between ID-EMV and PEC were unable to detect causality, implying that stock market volatility stemming from COVID-19 risk had no impact on the sports economy. However, allowing for potential asymmetric effects in this relationship, we further investigated whether ID-EMV could significantly affect PEC under both positive and negative shocks. The empirical results confirm the existence of asymmetric effects. Therefore, we are the first to focus on this asymmetric effect and conduct empirical research on it, which may help provide educators and financial market participants with a novel research perspective.
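Asymmetric causality analyses of this kind typically start by decomposing each series into cumulative positive and negative shocks, which are then tested separately. A minimal sketch of that decomposition (the function name and the Hatemi-J-style partial-sum construction are illustrative assumptions, not necessarily the paper's exact method):

```python
import numpy as np

def partial_sums(x):
    """Split a series into cumulative positive and negative shocks,
    a common first step in asymmetric-causality analyses."""
    dx = np.diff(x)
    pos = np.cumsum(np.maximum(dx, 0.0))  # cumulative positive shocks
    neg = np.cumsum(np.minimum(dx, 0.0))  # cumulative negative shocks
    return pos, neg

# Toy series: rises, dips, rises again.
x = np.array([1.0, 2.0, 1.5, 3.0])
pos, neg = partial_sums(x)
# pos = [1.0, 1.0, 2.5]; neg = [0.0, -0.5, -0.5]
```

Causality tests (e.g. Granger-type) are then run from the positive and negative components of one series to those of the other, which is what lets an effect show up under one sign of shock but not the other.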

    Similarity of DMD gene deletion and duplication in the Chinese patients compared to global populations

    © 2008 Wang et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License.

    Federated NLP in Few-shot Scenarios

    Natural language processing (NLP) sees rich mobile applications. To support various language understanding tasks, a foundation NLP model is often fine-tuned in a federated, privacy-preserving setting (FL). This process currently relies on at least hundreds of thousands of labeled training samples from mobile clients; yet mobile users often lack the willingness or knowledge to label their data. Such an inadequacy of data labels is known as a few-shot scenario; it becomes the key blocker for mobile NLP applications. For the first time, this work investigates federated NLP in the few-shot scenario (FedFSL). By retrofitting algorithmic advances of pseudo labeling and prompt learning, we first establish a training pipeline that delivers competitive accuracy when only 0.05% (fewer than 100) of the training samples are labeled and the remainder is unlabeled. To instantiate the workflow, we further present a system, FFNLP, addressing the high execution cost with novel designs: (1) curriculum pacing, which injects pseudo labels into the training workflow at a rate commensurate with the learning progress; (2) representational diversity, a mechanism for selecting the most learnable data, only for which pseudo labels will be generated; (3) co-planning of a model's training depth and layer capacity. Together, these designs reduce the training delay, client energy, and network traffic by up to 46.0×, 41.2×, and 3000.0×, respectively. Through algorithm/system co-design, FFNLP demonstrates that FL can apply to challenging settings where most training samples are unlabeled.
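The curriculum-pacing idea above can be made concrete: pseudo labels are injected at a rate that grows with training progress, starting with only the most confident examples. The sketch below is a hypothetical illustration; the function name, the linear schedule, and confidence-based ranking are assumptions, not FFNLP's published algorithm:

```python
def curriculum_pacing(unlabeled, model_confidence, round_idx, total_rounds,
                      base_rate=0.1):
    """Pick the slice of the unlabeled pool to pseudo-label this round.
    The injection rate ramps linearly with training progress (an
    illustrative schedule), and the most confident examples go first."""
    # Fraction of the pool to pseudo-label this round.
    rate = base_rate + (1.0 - base_rate) * (round_idx / total_rounds)
    # Rank examples by model confidence, most confident first.
    ranked = sorted(unlabeled, key=model_confidence, reverse=True)
    k = int(rate * len(ranked))
    return ranked[:k]

# Toy usage: confidence equals the item's value, halfway through training.
pool = list(range(10))
chosen = curriculum_pacing(pool, model_confidence=lambda x: x,
                           round_idx=5, total_rounds=10)
# rate = 0.1 + 0.9 * 0.5 = 0.55, so the 5 most confident items are chosen.
```

Pacing the injection this way avoids flooding an immature model with noisy pseudo labels early on, which is the failure mode a fixed injection rate tends to hit.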

    Towards Practical Few-shot Federated NLP

    Transformer-based pre-trained models have emerged as the predominant solution for natural language processing (NLP). Fine-tuning such pre-trained models for downstream tasks often requires a considerable amount of labeled private data. In practice, private data is often distributed across heterogeneous mobile devices and may be prohibited from being uploaded. Moreover, well-curated labeled data is often scarce, presenting an additional challenge. To address these challenges, we first introduce a data generator for federated few-shot learning tasks, which encompasses the quantity and skewness of scarce labeled data in a realistic setting. Subsequently, we propose AUG-FedPrompt, a prompt-based federated learning system that exploits abundant unlabeled data for data augmentation. Our experiments indicate that AUG-FedPrompt can perform on par with full-set fine-tuning with a limited amount of labeled data. However, such competitive performance comes at a significant system cost. Comment: EuroSys23 workshop
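A data generator that controls the quantity and skewness of labels across clients is often built on a Dirichlet label-skew partition. The sketch below shows that common construction; it is an illustrative stand-in, not necessarily the paper's exact generator:

```python
import numpy as np

def skewed_partition(labels, n_clients, alpha=0.5, seed=0):
    """Split sample indices across clients with Dirichlet label skew.
    Smaller alpha means more skew (clients see fewer classes); this is
    a common federated-learning construction, used here as a sketch."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # Proportion of class c assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients

# Toy usage: 100 binary-labeled samples over 4 clients, heavy skew.
labels = np.array([0] * 50 + [1] * 50)
parts = skewed_partition(labels, n_clients=4, alpha=0.1)
```

Sweeping `alpha` then yields the "realistic setting" spectrum the abstract refers to, from near-IID splits (large alpha) to clients holding almost a single class (small alpha).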